# Multi-objective alignment
## Gpt2 Large Helpful Reward Model

A GPT-2 Large model trained on the helpfulness portion of the Anthropic/hh-rlhf dataset, designed for detecting helpful responses and for use as a reward model in RLHF (Reinforcement Learning from Human Feedback).

- **License:** MIT
- **Tags:** Large Language Model, Transformers
- **Author:** Ray2333
- **Downloads:** 2,935 · **Likes:** 11
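
As a quick illustration, the sketch below scores an hh-rlhf-style conversation for helpfulness with this reward model. It assumes details not confirmed by this page: that the Hugging Face repo id is `Ray2333/gpt2-large-helpful-reward_model` and that the model exposes a single-logit sequence-classification head, as is typical for RLHF reward models.

```python
# Minimal sketch: scoring a response with a GPT-2 based helpfulness reward model.
# Repo id and single-logit head are assumptions, not confirmed by the page above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Ray2333/gpt2-large-helpful-reward_model"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Transcript in the Anthropic/hh-rlhf "Human:/Assistant:" format.
text = (
    "\n\nHuman: How do I bake bread at home?"
    "\n\nAssistant: Mix flour, water, yeast, and salt, knead the dough, "
    "let it rise, then bake at around 230°C until golden."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Assuming one output logit: a scalar reward, higher = more helpful.
    reward = model(**inputs).logits[0, 0].item()
print(f"helpfulness reward: {reward:.3f}")
```

In an RLHF pipeline, a scalar score like this would be computed for each sampled response and fed to the policy-optimization step (e.g., PPO) as the reward signal.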